We develop a Bayesian approach to predict continuous or binary outcomes from data collected from multiple sources with a multi-way (i.e., multidimensional tensor) structure. As a motivating example, we consider molecular data from multiple 'omics sources, measured over multiple developmental time points, as predictors of early-life iron deficiency (ID) in a rhesus monkey model. We use a linear model with a low-rank structure on the coefficients to capture multi-way dependence, and model the variance of the coefficients separately for each source to infer their relative contributions. Conjugate priors facilitate an efficient Gibbs sampling algorithm for posterior inference, assuming a continuous outcome with normal errors or a binary outcome with a probit link. Simulations show that our model performs as expected in terms of misclassification rate and correlation between estimated and true coefficients, with performance gains from incorporating multi-way structure and modest additional gains from accounting for the differing signal sizes of the different sources. Moreover, it provides robust classification of ID monkeys for our motivating application. Software in the form of R code is available at https://github.com/biostatskim/bayesmsmw
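To make the posterior-inference step concrete, below is a minimal sketch of a probit data-augmentation Gibbs sampler (Albert-Chib style) for a binary outcome with a conjugate normal prior on an unstructured coefficient vector. It is only a simplified stand-in: the actual model described above additionally imposes low-rank multi-way structure and source-specific coefficient variances, which this sketch omits, and all names are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=2000, prior_var=10.0, seed=0):
    """Albert-Chib Gibbs sampler for y ~ Bernoulli(Phi(X @ beta)).

    Simplified stand-in for the full multi-source, multi-way model:
    beta is an unstructured vector with an isotropic normal prior.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    V = np.linalg.inv(X.T @ X + np.eye(p) / prior_var)  # posterior covariance
    chol_V = np.linalg.cholesky(V)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        # 1) Sample latent z_i from a truncated normal given beta and y_i.
        mu = X @ beta
        lo = np.where(y == 1, -mu, -np.inf)   # z > 0 when y = 1
        hi = np.where(y == 1, np.inf, -mu)    # z < 0 when y = 0
        z = truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)
        # 2) Sample beta from its conjugate normal full conditional.
        mean = V @ (X.T @ z)
        beta = mean + chol_V @ rng.standard_normal(p)
        draws[it] = beta
    return draws
```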
Remote sensing imagery provides comprehensive views of the Earth, where different sensors collect complementary data at different spatial scales. Large, pretrained models are commonly finetuned with imagery that is heavily augmented to mimic different conditions and scales, with the resulting models used for various tasks with imagery from a range of spatial scales. Such models overlook scale-specific information in the data. In this paper, we present Scale-MAE, a pretraining method that explicitly learns relationships between data at different, known scales throughout the pretraining process. Scale-MAE pretrains a network by masking an input image at a known input scale, where the area of the Earth covered by the image determines the scale of the ViT positional encoding, not the image resolution. Scale-MAE encodes the masked image with a standard ViT backbone, and then decodes the masked image through a bandpass filter to reconstruct low/high frequency images at lower/higher scales. We find that tasking the network with reconstructing both low/high frequency images leads to robust multiscale representations for remote sensing imagery. Scale-MAE achieves an average of a $5.0\%$ non-parametric kNN classification improvement across eight remote sensing datasets compared to current state-of-the-art and obtains a $0.9$ mIoU to $3.8$ mIoU improvement on the SpaceNet building segmentation transfer task for a range of evaluation scales.
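As a rough illustration of the scale-aware positional encoding described above, here is a minimal sketch in which the standard sinusoidal positional encoding is computed from patch positions scaled by the image's ground sample distance (GSD) relative to a reference GSD, so that images covering more ground per pixel receive proportionally larger positions. This is a simplified reading of the abstract, not the paper's exact formulation.

```python
import numpy as np

def gsd_positional_encoding(num_patches, dim, gsd, reference_gsd=1.0):
    """Sinusoidal positional encoding with positions scaled by the relative
    ground sample distance (metres of ground covered per pixel).

    A patch grid covering more of the Earth (larger gsd) gets proportionally
    larger positions, so the encoding reflects ground area, not pixel count.
    """
    positions = np.arange(num_patches)[:, None] * (gsd / reference_gsd)
    div = np.exp(np.arange(0, dim, 2) * (-np.log(10000.0) / dim))
    enc = np.zeros((num_patches, dim))
    enc[:, 0::2] = np.sin(positions * div)
    enc[:, 1::2] = np.cos(positions * div)
    return enc

# Same pixel grid, different ground coverage -> different encodings.
coarse = gsd_positional_encoding(num_patches=196, dim=768, gsd=10.0)  # 10 m/pixel
fine = gsd_positional_encoding(num_patches=196, dim=768, gsd=0.3)     # 0.3 m/pixel
```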
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
Supervised machine learning-based medical image computing applications necessitate expert label curation, while unlabelled image data might be relatively abundant. Active learning methods aim to prioritise a subset of available image data for expert annotation, for label-efficient model training. We develop a controller neural network that measures priority of images in a sequence of batches, as in batch-mode active learning, for multi-class segmentation tasks. The controller is optimised by rewarding positive task-specific performance gain, within a Markov decision process (MDP) environment that also optimises the task predictor. In this work, the task predictor is a segmentation network. A meta-reinforcement learning algorithm is proposed with multiple MDPs, such that the pre-trained controller can be adapted to a new MDP that contains data from different institutes and/or requires segmentation of different organs or structures within the abdomen. We present experimental results using multiple CT datasets from more than one thousand patients, with segmentation tasks of nine different abdominal organs, to demonstrate the efficacy of the learnt prioritisation controller function and its cross-institute and cross-organ adaptability. We show that the proposed adaptable prioritisation metric yields converging segmentation accuracy for the novel class of kidney, unseen in training, using between approximately 40\% to 60\% of labels otherwise required with other heuristic or random prioritisation metrics. For clinical datasets of limited size, the proposed adaptable prioritisation offers a performance improvement of 22.6\% and 10.2\% in Dice score, for tasks of kidney and liver vessel segmentation, respectively, compared to random prioritisation and alternative active sampling strategies.
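To sketch the interaction loop described above, the snippet below shows one way a batch-mode prioritisation controller could be rewarded by the task predictor's validation gain and updated with an approximate REINFORCE-style gradient. The linear scoring controller, placeholder features, and simulated reward are all assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_batch(scores, batch_size, rng):
    """Sample a batch of image indices with probability proportional to
    softmax(scores), mimicking stochastic batch-mode prioritisation."""
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    idx = rng.choice(len(scores), size=batch_size, replace=False, p=probs)
    return idx, probs

# Placeholders for the real controller, segmentation network, and Dice reward.
n_candidates, feat_dim, batch_size, lr = 100, 16, 8, 0.1
theta = np.zeros(feat_dim)                      # linear controller weights
features = rng.standard_normal((n_candidates, feat_dim))

def train_predictor_and_eval(selected_idx):
    """Placeholder: train the segmentation network on the newly labelled
    batch and return the validation performance gain (simulated here)."""
    return float(features[selected_idx, 0].mean())

baseline = 0.0
for episode in range(50):
    scores = features @ theta                   # controller priorities
    idx, probs = select_batch(scores, batch_size, rng)
    reward = train_predictor_and_eval(idx)      # positive performance gain
    # Approximate REINFORCE update: raise the scores of selected items in
    # proportion to the advantage (reward minus a running baseline).
    advantage = reward - baseline
    grad = (features[idx] - probs @ features).sum(axis=0)
    theta += lr * advantage * grad
    baseline = 0.9 * baseline + 0.1 * reward
```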
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
Performance metrics for medical image segmentation models are used to measure the agreement between a reference annotation and a prediction. A common set of metrics is used when developing such models to make results more comparable. However, there is a mismatch between the distributions found in public data sets and the cases encountered in clinical practice. Many common metrics fail to capture the impact of this mismatch, especially for clinical data sets containing uncertain, small, or empty reference annotations. Consequently, such metrics may not validate that a model achieves clinically meaningful agreement. Dimensions of clinical value include independence from the reference-annotation volume, consideration of the uncertainty of reference annotations, reward for volumetric and/or location agreement, and reward for correctly classifying empty reference annotations. Unlike common public data sets, our in-house data set is more representative: it contains uncertain, small, and empty reference annotations. We examine publicly available metrics on the predictions of a deep learning framework to determine which settings of common metrics provide meaningful results. We compare against a public benchmark data set without uncertain, small, or empty reference annotations. The code will be released.
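As a concrete illustration of one of the evaluation dimensions listed above (rewarding correct classification of empty reference annotations), the snippet below shows a Dice implementation with an explicit convention for the empty-reference case. The convention chosen here (score 1 if both reference and prediction are empty, 0 if only the reference is empty) is one common choice, not necessarily the one adopted in the paper.

```python
import numpy as np

def dice_score(reference, prediction, empty_value=1.0):
    """Dice coefficient with an explicit convention for empty references.

    If both masks are empty, the prediction correctly classified an empty
    case and receives `empty_value` (here 1.0); if only the reference is
    empty, every predicted voxel is a false positive and the score is 0.
    """
    reference = np.asarray(reference, dtype=bool)
    prediction = np.asarray(prediction, dtype=bool)
    ref_sum, pred_sum = reference.sum(), prediction.sum()
    if ref_sum == 0:
        return empty_value if pred_sum == 0 else 0.0
    intersection = np.logical_and(reference, prediction).sum()
    return 2.0 * intersection / (ref_sum + pred_sum)

print(dice_score(np.zeros((4, 4)), np.zeros((4, 4))))   # 1.0 (rewarded)
print(dice_score(np.zeros((4, 4)), np.ones((4, 4))))    # 0.0 (penalised)
```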
In this study, the radiomics approach is extended to optical fluorescence molecular imaging data for tissue classification, termed "optomics". Fluorescence molecular imaging is emerging for precise surgical guidance during resection of head and neck squamous cell carcinoma (HNSCC). However, tumor-to-normal tissue contrast is confounded by intrinsic physiological limitations of heterogeneous expression of the target molecule, epidermal growth factor receptor (EGFR). Optomics seeks to improve tumor identification by probing textural pattern differences in EGFR expression conveyed by fluorescence. A total of 1,472 standardized optomic features were extracted from fluorescence image samples. A supervised machine learning pipeline involving a support vector machine classifier was trained with the 25 top-ranked features selected by the minimum redundancy maximum relevance criterion. Model predictive performance was compared to a fluorescence intensity thresholding method by classifying image patches of resected tissue with histologically confirmed malignancy status. The optomics approach provided consistent prediction accuracy across all test-set samples, regardless of dose, compared to the fluorescence intensity thresholding method (mean accuracy of 89% vs. 81%; P = 0.0072). The improved performance demonstrates that extending radiomics to fluorescence molecular imaging data offers a promising image analysis technique for cancer detection in fluorescence-guided surgery.
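A classification pipeline of this kind could be prototyped roughly as below with scikit-learn. Note that mutual-information-based univariate selection is used here only as a convenient stand-in for the minimum redundancy maximum relevance (mRMR) criterion, which is not part of scikit-learn, and the feature and label arrays are placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: rows are image patches, columns are optomic features.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1472))      # 1,472 extracted texture features
y = rng.integers(0, 2, size=200)          # histology-confirmed malignancy labels

pipeline = Pipeline([
    ("scale", StandardScaler()),
    # Stand-in for mRMR: keep the 25 features most informative about y.
    ("select", SelectKBest(mutual_info_classif, k=25)),
    ("svm", SVC(kernel="rbf", C=1.0)),
])

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```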
Atmospheric effects, such as turbulence and background thermal noise, inhibit the propagation of coherent light used in on-off keying free-space optical communication. Here we present and experimentally validate convolutional neural networks that reduce the bit error rate of free-space optical communication in post-processing, and that are significantly simpler and cheaper than existing solutions based on advanced optics. Our approach consists of two neural networks: the first detects the presence of coherent bit sequences in thermal noise and turbulence, and the second demodulates the coherent bit sequences. All data for our networks were obtained experimentally by generating on-off keyed streams of coherent light, combining them with thermal light, and passing them through a turbulent water tank that simulates atmospheric turbulence; the networks recover the transmitted bit streams with high accuracy. Our convolutional neural networks improve detection accuracy over a threshold classification scheme and can be integrated with current demodulation and error-correction schemes.
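For illustration, a minimal PyTorch sketch of the second stage (demodulating a received on-off keyed intensity trace into bits) is shown below; the layer sizes, sequence length, and samples-per-bit are invented for the example and do not reflect the authors' architecture.

```python
import torch
import torch.nn as nn

class OOKDemodulator(nn.Module):
    """Toy 1D CNN mapping a received intensity trace to per-bit logits.

    Assumes each bit occupies `samples_per_bit` consecutive samples, so the
    strided convolution downsamples by that factor.
    """
    def __init__(self, samples_per_bit=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3, stride=samples_per_bit),
            nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=1),   # one logit per bit position
        )

    def forward(self, x):                      # x: (batch, 1, n_samples)
        return self.net(x).squeeze(1)          # (batch, n_bits) logits

model = OOKDemodulator(samples_per_bit=8)
trace = torch.randn(4, 1, 1024)                # 4 noisy traces, 128 bits each
bit_logits = model(trace)                      # shape: (4, 128)
bits = (bit_logits > 0).int()                  # hard decisions
```

Such a model would be trained with a per-bit binary cross-entropy loss on the experimentally recorded traces and their known transmitted bit streams.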
Real-world behavior is often shaped by complex interactions among multiple agents. To study multi-agent behavior reliably, advances in unsupervised and self-supervised learning have enabled a variety of behavioral representations to be learned from trajectory data. To date, however, there is no unified set of benchmarks for quantitatively and systematically comparing methods across a broad range of behavior analysis settings. We aim to address this by introducing a large-scale, multi-agent trajectory dataset from real-world behavioral neuroscience experiments that covers a range of behavior analysis tasks. Our dataset consists of trajectory data from common model organisms, with 9.6 million frames of mouse data and 4.4 million frames of fly data, across a variety of experimental settings such as different strains, lengths of interaction, and optogenetic stimulation. A subset of frames also includes expert-annotated behavior labels. Improvements on our dataset correspond to behavioral representations that generalize across multiple organisms and are able to capture differences relevant to common behavior analysis tasks.
Data clipping is crucial for reducing noise in quantization operations and improving the accuracy of quantization-aware training (QAT). Current practice relies on heuristics to set the clipping-threshold scalars and cannot be shown to be optimal. We propose Optimally Clipped Tensors And Vectors (OCTAV), a recursive algorithm for determining MSE-optimal clipping scalars. Derived from the fast Newton-Raphson method, OCTAV finds optimal clipping scalars on the fly, at every iteration of a QAT routine. The QAT algorithm is therefore formulated with provably minimum quantization noise at each step. In addition, we reveal limitations of common gradient-estimation techniques in QAT and propose magnitude-aware differentiation to further improve accuracy. Experimentally, OCTAV-enabled QAT achieves state-of-the-art accuracy on multiple tasks, including training from scratch and retraining ResNets and MobileNets on ImageNet, as well as fine-tuning with BERT models, where OCTAV-enabled QAT consistently preserves accuracy at low precision (4 to 6 bits). Our results require no modification to the baseline training recipe other than the insertion of quantization operations where appropriate.
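To make the recursion concrete, below is a sketch of a Newton-Raphson-style fixed-point iteration for an MSE-optimal clipping scalar, following the commonly cited form of this kind of update: elements inside the clipping range contribute quantization noise weighted by 4^{-B}/3, while elements outside contribute clipping noise. The exact expression used by OCTAV should be checked against the paper; this is an illustrative approximation.

```python
import numpy as np

def octav_clip_scalar(x, num_bits=4, n_iter=20):
    """Fixed-point iteration for an MSE-optimal clipping scalar s for
    uniform B-bit quantization of tensor x.

    At each step, elements inside [-s, s] contribute quantization noise
    (weight 4**-num_bits / 3) and elements outside contribute clipping noise.
    """
    a = np.abs(np.asarray(x, dtype=np.float64).ravel())
    s = a.mean() + 1e-12                       # simple starting point
    w = 4.0 ** (-num_bits) / 3.0
    for _ in range(n_iter):
        inside = a <= s
        s = a[~inside].sum() / (w * inside.sum() + (~inside).sum() + 1e-12)
    return s

x = np.random.default_rng(0).standard_normal(100_000)
print(octav_clip_scalar(x, num_bits=4))
```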